Neural Code Comprehension: A Learnable Representation of Code Semantics
With the recent success of embeddings in natural language processing, research has been conducted into applying similar methods to code analysis. Most works attempt to process the code directly or use a syntactic tree representation, treating it like sentences written in a natural language. However, none of the existing methods are sufficient to comprehend program semantics robustly, due to structural features such as function calls, branching, and interchangeable order of statements. In this paper, we propose a novel processing technique to learn code semantics, and apply it to a variety of program analysis tasks. In particular, we stipulate that a robust distributional hypothesis of code applies to both human- and machine-generated programs. Following this hypothesis, we define an embedding space, inst2vec, based on an Intermediate Representation (IR) of the code that is independent of the source programming language. We provide a novel definition of contextual flow for this IR, leveraging both the underlying data- and control-flow of the program. We then analyze the embeddings qualitatively using analogies and clustering, and evaluate the learned representation on three different high-level tasks. We show that even without fine-tuning, a single RNN architecture and fixed inst2vec embeddings outperform specialized approaches for performance prediction (compute device mapping, optimal thread coarsening); and algorithm classification from raw code (104 classes), where we set a new state-of-the-art.
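The "distributional hypothesis of code" stated above can be illustrated with a small sketch: a statement is characterized by the statements in its contextual-flow neighborhood, and the resulting (target, context) pairs are what a skip-gram-style embedding trainer would consume. The neighborhood map and statement strings below are illustrative toy data, not the paper's actual pipeline.

```python
def skipgram_pairs(neighbors):
    """Yield (target, context) pairs from a contextual-flow adjacency map.

    Each statement is paired with every statement in its neighborhood,
    mirroring how skip-gram training data is extracted from contexts.
    """
    for stmt, ctx in neighbors.items():
        for c in ctx:
            yield (stmt, c)

# Toy contextual-flow neighborhoods over LLVM-IR-like statements
neighbors = {
    "%a = load i32, i32* %p": ["%b = add i32 %a, 1"],
    "%b = add i32 %a, 1": ["%a = load i32, i32* %p",
                           "store i32 %b, i32* %q"],
}

pairs = list(skipgram_pairs(neighbors))
```

Feeding such pairs to any skip-gram implementation would yield one embedding vector per (abstracted) IR statement; the paper's contribution lies in how the neighborhoods themselves are defined via contextual flow.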
Reviews: Neural Code Comprehension: A Learnable Representation of Code Semantics
This paper proposes learning embeddings of individual code statements for use in a range of program analysis tasks, such as classifying the type of algorithm, placing code on a heterogeneous CPU-GPU architecture, and scheduling programs on GPUs. The novelty of the paper lies in how the code embeddings are computed and trained. Instead of looking at code statements in a high-level language and following either data flow or control flow alone, the paper proposes to use statements at the intermediate-representation level (which is independent of the high-level language used) and to take both data and control flow into account. To this end, the proposed technique builds "contextual flow graphs", whose nodes are variable or label identifiers and whose nodes are connected through either data-flow or control-flow edges.
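The contextual-flow-graph construction described above can be sketched in a few lines: statements are connected by a data-flow edge when one defines an SSA value another uses, and by a control-flow edge when a branch targets a label. The parsing below is deliberately simplified toy code over LLVM-IR-like strings (real LLVM IR needs a proper parser), and the function name is illustrative, not from the paper.

```python
import re

def build_xfg(ir_lines):
    """Build a toy contextual flow graph (XFG) from LLVM-IR-like lines.

    Returns a set of (src_index, dst_index, kind) edges, where kind is
    "data" for SSA def-use edges and "ctrl" for branch-to-label edges.
    """
    defs = {}    # SSA identifier -> index of its defining statement
    labels = {}  # label name -> index of the label line
    edges = set()
    for i, line in enumerate(ir_lines):
        m = re.match(r"(%\w+) =", line)
        if m:
            defs[m.group(1)] = i
        m = re.match(r"(\w+):", line)
        if m:
            labels[m.group(1)] = i
    for i, line in enumerate(ir_lines):
        # Data flow: this statement uses a value defined elsewhere.
        for ident in re.findall(r"%\w+", line):
            j = defs.get(ident)
            if j is not None and j != i:
                edges.add((j, i, "data"))
        # Control flow: this statement branches to a label.
        for tgt in re.findall(r"label %(\w+)", line):
            if tgt in labels:
                edges.add((i, labels[tgt], "ctrl"))
    return edges

ir = [
    "%x = load i32, i32* %p",
    "%c = icmp sgt i32 %x, 0",
    "br i1 %c, label %then, label %exit",
    "then:",
    "%y = add i32 %x, 1",
    "exit:",
]
xfg = build_xfg(ir)
```

Running this on the six-line snippet produces data-flow edges from the `load` and `icmp` to their users and control-flow edges from the `br` to both labels, which is exactly the mix of dependencies the review says the embedding contexts are drawn from.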
Ben-Nun, Tal, Jakobovits, Alice Shoshana, Hoefler, Torsten
- Information Technology > Software > Programming Languages (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)